24 research outputs found

    A graphical simulator for modeling complex crowd behaviors

    Abnormal crowd behaviors in varied real-world settings can pose a serious threat to public safety. The video data required for relevant analysis are often difficult to acquire due to security, privacy, and data protection issues. Without large amounts of realistic crowd data, it is difficult to develop and verify crowd behavioral models, event detection techniques, and the corresponding tests and evaluations. This paper presents a synthetic method for generating crowd movements and tendencies based on existing social and behavioral studies. Graph and tree search algorithms as well as game-engine-enabled techniques have been adopted in the study. The main outcomes of this research include a categorization model for entity-based behaviors following a linear aggregation approach, and the construction of an innovative agent-based pipeline that combines the A* path-finding algorithm with an enhanced Social Force Model. A Spatial-Temporal Texture (STT) technique has been adopted to evaluate the model's effectiveness. Tests have highlighted the visual similarities between STTs extracted from the simulations and their counterparts extracted from real-world video recordings.
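    To make the agent-based pipeline concrete, the minimal NumPy sketch below shows one integration step of a simplified, Helbing-style Social Force Model in which each agent steers toward a waypoint that an A* planner could supply. The constants (desired_speed, tau, A, B) and the way waypoints are passed in are illustrative assumptions, not the paper's actual parameters or coupling.

```python
import numpy as np

def social_force_step(pos, vel, goals, dt=0.1, desired_speed=1.3,
                      tau=0.5, A=2.0, B=0.3):
    """One Euler step of a simplified Social Force Model.

    pos, vel, goals: (N, 2) arrays of agent positions, velocities and
    per-agent waypoints (e.g. produced by an A* planner on a grid map).
    All constants are illustrative, not the paper's values.
    """
    n = len(pos)
    # Driving force: relax each agent's velocity toward its desired velocity.
    to_goal = goals - pos
    dist = np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9
    desired_vel = desired_speed * to_goal / dist
    force = (desired_vel - vel) / tau

    # Pairwise repulsive (social) force: exponential in inter-agent distance.
    for i in range(n):
        diff = pos[i] - pos                    # vectors pointing away from others
        d = np.linalg.norm(diff, axis=1) + 1e-9
        push = A * np.exp(-d / B)[:, None] * (diff / d[:, None])
        push[i] = 0.0                          # no self-interaction
        force[i] += push.sum(axis=0)

    vel = vel + force * dt
    pos = pos + vel * dt
    return pos, vel
```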

    Extracting Spatio-temporal Texture Signatures for Crowd Abnormality Detection

    In order to achieve automatic prediction of and warning about hazardous crowd behaviors, a Spatio-Temporal Volume (STV) analysis method is proposed in this research to detect crowd abnormality recorded in CCTV streams. The method starts by building STV models from video data. STV slices, called Spatio-Temporal Textures (STT), can then be analyzed to detect crowded regions. After calculating the Gray Level Co-occurrence Matrix (GLCM) over those regions, abnormal crowd behavior, including panic and other behavioral patterns, can be identified. In this research, the proposed STT signatures have been defined and tested on benchmark video databases. The proposed algorithm has shown promising accuracy and efficiency for detecting crowd-based abnormal behaviors. The results show that the STT signatures are suitable descriptors for detecting certain crowd events, which provides an encouraging direction for real-time surveillance and video retrieval applications.
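    As a rough illustration of the GLCM step, the sketch below computes a small texture signature for one STT slice using scikit-image. The quantisation level, pixel offsets and the four Haralick-style properties are illustrative choices, not the exact signature definition used in the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def stt_glcm_signature(stt_slice, levels=32):
    """Compute a GLCM-based texture signature for one Spatio-Temporal
    Texture (STT) slice, i.e. a 2-D grayscale cut through the STV.

    Assumes an 8-bit gray slice; all parameter choices are illustrative.
    """
    # Quantise the slice so the co-occurrence matrix stays small.
    q = (stt_slice.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    # Average each property over distances and angles to get one scalar each.
    return np.array([graycoprops(glcm, p).mean() for p in props])
```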

    An Approach to Detect Crowd Panic Behavior using Flow-based Feature

    With the purpose of achieving automated detection of abnormal crowd behavior in public spaces, this paper discusses the categories of typical crowd and individual behaviors and their patterns. Popular image features for abnormal behavior detection are also introduced, including global flow-based features such as optical flow, and local spatio-temporal features such as the Spatio-Temporal Volume (STV). After reviewing several related abnormal behavior detection algorithms, a brand-new approach to detecting crowd panic behavior based on optical flow features is proposed. In the experiments, all panic behaviors were successfully detected. Finally, future work to improve the current approach is discussed.
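    A minimal sketch of a flow-based panic cue is given below: dense Farnebäck optical flow is computed between two frames, and a score is formed from the mean speed of fast-moving pixels multiplied by the entropy of their motion directions. The threshold and the scoring heuristic are hypothetical stand-ins for the paper's feature, shown only to make the general idea concrete.

```python
import cv2
import numpy as np

def panic_score(prev_gray, curr_gray, mag_thresh=2.0):
    """Score one frame pair for panic-like motion from dense optical flow.

    The heuristic (mean speed of fast pixels times directional spread) and
    the threshold are illustrative assumptions.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    moving = mag > mag_thresh                 # keep only fast-moving pixels
    if not np.any(moving):
        return 0.0
    # Directional entropy: panic tends to scatter motion in many directions.
    hist, _ = np.histogram(ang[moving], bins=16, range=(0, 2 * np.pi))
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return float(mag[moving].mean() * entropy)
```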

    An effective video processing pipeline for crowd pattern analysis

    With the purpose of automatically detecting crowd patterns, including abrupt and abnormal changes, a novel approach is proposed for extracting motion “textures” from dynamic Spatio-Temporal Volume (STV) blocks formulated from live video streams. This paper starts by introducing the common approach for STV construction and the corresponding Spatio-Temporal Texture (STT) extraction techniques. Next, the crowd motion information contained within randomly sampled STT slices is evaluated using information entropy to cull the static background and noise that occupy most of the STV space. A preprocessing step using Gabor filtering has been devised and tested to improve the STT sampling efficiency and motion fidelity. The technique has been applied to benchmark video databases for proof of concept and performance evaluation. Preliminary results have shown encouraging outcomes and promising potential for real-world crowd monitoring and control applications.
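    The sketch below illustrates the entropy-based culling idea on a single sampled STT slice, with optional Gabor band-pass filtering beforehand. The entropy threshold and Gabor parameters are assumptions for illustration, not the values used in the paper.

```python
import cv2
import numpy as np

def keep_stt_slice(stt_slice, entropy_thresh=4.0, use_gabor=True):
    """Decide whether a randomly sampled STT slice carries enough crowd
    motion to be worth analysing, using Shannon entropy of its gray levels.

    Assumes an 8-bit gray slice; threshold and Gabor settings are illustrative.
    """
    img = stt_slice
    if use_gabor:
        # Band-pass the slice so near-static background contributes little.
        # Args: ksize, sigma, theta, lambda, gamma, psi.
        kernel = cv2.getGaborKernel((21, 21), 4.0, 0.0, 10.0, 0.5, 0.0)
        img = cv2.filter2D(img, cv2.CV_32F, kernel)
        img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return entropy > entropy_thresh
```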

    A novel segmentation method for uneven lighting image with noise injection based on non-local spatial information and intuitionistic fuzzy entropy

    Local thresholding methods for segmenting unevenly lit images have the limitations that they are very sensitive to injected noise and that their performance relies largely upon the choice of the initial window size. This paper proposes a novel algorithm for segmenting unevenly lit images with strong noise based on non-local spatial information and intuitionistic fuzzy theory. We regard an image as a gray wave in three-dimensional space, composed of many peaks and troughs that divide the image into many local sub-regions in different directions. Our algorithm computes the relative characteristic of each pixel within its sub-region using a fuzzy membership function and uses it in place of the pixel's absolute characteristic (its gray level), reducing the influence of uneven lighting on segmentation. At the same time, non-local adaptive spatial constraints on the pixels are introduced to prevent noise from interfering with the search for local sub-regions and the computation of local characteristics. Moreover, edge information is taken into account to avoid false peak and trough labeling. Finally, a global method based on intuitionistic fuzzy entropy is applied to the wave-transformed image to obtain the segmentation result. Experiments on several test images show that the proposed method is highly effective at reducing the influence of uneven illumination and injected noise, and behaves more robustly than several classical global and local thresholding methods.
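    To make the final global step concrete, the sketch below selects a threshold by minimising an intuitionistic-fuzzy-entropy style criterion over all candidate gray levels. The membership function, the Sugeno-type non-membership and the particular entropy form are common textbook choices, not necessarily the authors' exact definitions, and the sketch omits the wave transformation and the non-local spatial constraints.

```python
import numpy as np

def ifs_entropy_threshold(gray, lam=1.0):
    """Pick a global threshold by minimising an intuitionistic fuzzy
    entropy criterion over candidate gray levels (illustrative only)."""
    levels = np.arange(256, dtype=np.float64)
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()

    best_t, best_e = 128, np.inf
    for t in range(1, 255):
        w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (levels[:t + 1] * p[:t + 1]).sum() / w0      # background mean
        m1 = (levels[t + 1:] * p[t + 1:]).sum() / w1      # foreground mean
        class_mean = np.where(levels <= t, m0, m1)

        mu = 1.0 / (1.0 + np.abs(levels - class_mean) / 255.0)  # membership
        nu = (1.0 - mu) / (1.0 + lam * mu)                      # Sugeno non-membership
        pi = 1.0 - mu - nu                                       # hesitation degree

        # Histogram-weighted entropy: high when class membership is ambiguous.
        e = np.sum(p * (np.minimum(mu, nu) + pi) / (np.maximum(mu, nu) + pi))
        if e < best_e:
            best_e, best_t = e, t
    return best_t
```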

    Wavelet Domain Multidictionary Learning for Single Image Super-Resolution

    Image super-resolution (SR) aims at recovering the high-frequency (HF) details of a high-resolution (HR) image from a given low-resolution (LR) image and some priors about natural images. Learning the relationship between the LR image and its corresponding HF details is needed to guide the reconstruction of the HR image. In order to alleviate the uncertainty in HF detail prediction, the HR and LR images are usually decomposed into four subbands after a 1-level discrete wavelet transform (DWT): an approximation subband and three detail subbands. From our observation, the approximation subbands of the HR image and the corresponding bicubic-interpolated image are very similar, but the respective detail subbands are different. Therefore, this paper proposes an algorithm that learns four coupled principal component analysis (PCA) dictionaries describing the relationship between the approximation subband and the detail subbands. Experimental comparisons with various state-of-the-art methods show that the proposed algorithm is superior to related works.
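    A minimal sketch of the wavelet-domain dictionary learning step is shown below: both images are decomposed with a 1-level DWT and a PCA basis is fitted to patches from each of the four subbands. The patch size, component count, 'haar' wavelet and the reduction of the coupling step to independent PCA fits are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.feature_extraction.image import extract_patches_2d

def learn_wavelet_pca_dicts(hr_image, bicubic_image, patch_size=(6, 6),
                            n_components=32, max_patches=5000):
    """Fit one PCA dictionary per subband: the approximation subband of the
    bicubic-interpolated image and the three detail subbands of the HR image
    after a 1-level DWT. Coupling/regression between the bases is omitted.
    """
    cA_lr, _ = pywt.dwt2(bicubic_image, 'haar')
    _, (cH, cV, cD) = pywt.dwt2(hr_image, 'haar')

    rng = np.random.RandomState(0)
    dicts = {}
    for name, band in [('approx', cA_lr), ('H', cH), ('V', cV), ('D', cD)]:
        patches = extract_patches_2d(band, patch_size,
                                     max_patches=max_patches, random_state=rng)
        flat = patches.reshape(len(patches), -1)
        dicts[name] = PCA(n_components=n_components).fit(flat)
    return dicts
```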

    Group Abnormal Behaviour Detection Algorithm Based on Global Optical Flow

    Abnormal behaviour detection algorithms need to analyse behaviour on the basis of continuous video tracking, and their robustness is reduced by occlusion of moving targets, occlusion by the environment, and the movement of targets of the same colour. For this reason, for group behaviour, the RGB (red, green, and blue) images and the optical flow information between video frames are used as the input to the network. The direction, velocity, acceleration, and energy of the crowd are then weighted and fused into a global optical flow descriptor, while the crowd trajectory map is extracted from a single original frame. Next, in order to detect moving targets with large displacement and overcome the limitation that the traditional optical flow algorithm is only suited to small-displacement targets, a video abnormal behaviour detection algorithm based on a two-stream convolutional neural network is proposed. The network uses two branches to learn spatial and temporal information, respectively, and uses a short- and long-time neural network to model the dependencies between temporally distant video frames, so as to obtain the final behaviour classification. Simulation results show that the proposed method achieves good recognition performance on multiple datasets, and that using inter-frame motion information significantly improves abnormal behaviour detection.
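    The sketch below illustrates how direction, velocity, acceleration and energy statistics from two consecutive dense optical flow fields might be weighted and fused into a single global descriptor. The specific statistics (an 8-bin direction histogram, simple means) and the equal weights are assumptions made for illustration, not the paper's exact descriptor.

```python
import cv2
import numpy as np

def global_flow_descriptor(prev_flow, flow, weights=(0.25, 0.25, 0.25, 0.25)):
    """Fuse crowd direction, velocity, acceleration and energy statistics
    from two consecutive dense optical flow fields (H x W x 2 arrays) into
    one global descriptor. All choices here are illustrative.
    """
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    prev_mag, _ = cv2.cartToPolar(prev_flow[..., 0], prev_flow[..., 1])

    direction_hist, _ = np.histogram(ang, bins=8, range=(0, 2 * np.pi),
                                     weights=mag, density=True)
    velocity = mag.mean()                         # average crowd speed
    acceleration = np.abs(mag - prev_mag).mean()  # frame-to-frame speed change
    energy = np.mean(mag ** 2)                    # kinetic-energy-like term

    w_dir, w_vel, w_acc, w_en = weights
    return np.concatenate([w_dir * direction_hist,
                           [w_vel * velocity, w_acc * acceleration, w_en * energy]])
```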

    Joint Semantic Intelligent Detection of Vehicle Color under Rainy Conditions

    Color is an important feature of vehicles, and it plays a key role in intelligent traffic management and criminal investigation. Existing algorithms for vehicle color recognition are typically trained on data collected under good weather conditions and have poor robustness for outdoor visual tasks. Fine-grained vehicle color recognition under rainy conditions is still a challenging problem. In this paper, an algorithm for jointly deraining and recognizing vehicle color (JADAR) is proposed, in which three layers of UNet are embedded into RetinaNet-50 to obtain joint semantic fusion information. More precisely, the UNet subnet is used for deraining, and the feature maps of the recovered clean image and the feature maps extracted from the input image are cascaded into the Feature Pyramid Network (FPN) module to achieve joint semantic learning. The joint feature maps are then fed into the class and box subnets to classify and locate objects. The RainVehicleColor-24 dataset is used to train JADAR for vehicle color recognition under rainy conditions, and extensive experiments are conducted. Since the deraining and detection modules share the feature extraction layers, our algorithm maintains the test time of RetinaNet-50 while improving its robustness. Tested on self-built and public real-world datasets, the mean average precision (mAP) of vehicle color recognition reaches 72.07%, which beats both state-of-the-art vehicle color recognition algorithms and popular object detection algorithms.
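    As a structural illustration of the joint-semantic idea, the PyTorch sketch below shows a small deraining subnet whose features are cascaded with a backbone feature map before it would enter the FPN. The channel sizes, the residual deraining head and fusion by concatenation are illustrative assumptions, not the exact JADAR architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointDerainFeatureFusion(nn.Module):
    """Sketch: a shallow deraining subnet derains the input, and its features
    are concatenated with the detector backbone's features before the FPN.
    Channel sizes and the fusion scheme are illustrative assumptions.
    """
    def __init__(self, backbone_channels=256, derain_channels=64):
        super().__init__()
        self.derain_encoder = nn.Sequential(
            nn.Conv2d(3, derain_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(derain_channels, derain_channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.derain_head = nn.Conv2d(derain_channels, 3, 3, padding=1)
        # 1x1 conv to fuse derain features with backbone features.
        self.fuse = nn.Conv2d(backbone_channels + derain_channels,
                              backbone_channels, 1)

    def forward(self, image, backbone_feat):
        f = self.derain_encoder(image)                 # derain features
        derained = image - self.derain_head(f)         # residual rain removal
        # Resize derain features to the backbone feature map's spatial size.
        f_down = F.adaptive_avg_pool2d(f, backbone_feat.shape[-2:])
        fused = self.fuse(torch.cat([backbone_feat, f_down], dim=1))
        return derained, fused                         # fused would feed the FPN
```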
